Hover over the links for details on some connections (i.e., not all edges have details). Dots indicate ‘special cases of’ the parent node. Undirected connections suggest equivalence (≡ on hover) or a more complex relationship (usually for more general or more complex models). Note that this is a quick reference, not an exhaustive one.
Most matrix factorization techniques can be recast as one another, such that this could essentially be a highly interconnected circular graph.
\[X \approx UV'\] The observed data matrix \(X\) is approximately equal to some function of the ‘factors’ \(U\) and \(V\), possibly weighted (though all shown here use equal weights), and generalizing further to include a link function, i.e., \(f(UV')\) (I only note EPCA here).
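The basic factorization can be sketched numerically via a truncated SVD, which gives the best rank-\(k\) approximation of \(X\) in a least-squares sense. This is a minimal sketch using simulated data; the data dimensions and rank are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 20 observations on 5 variables
X = rng.normal(size=(20, 5))

# Full SVD: X = U diag(d) V'
U, d, Vt = np.linalg.svd(X, full_matrices=False)

# Rank-k approximation: keep only the first k factors
k = 2
U_k = U[:, :k] * d[:k]   # fold singular values into U
V_k = Vt[:k, :].T        # 'loadings' for the k retained factors

X_hat = U_k @ V_k.T      # X ≈ UV'
err = np.linalg.norm(X - X_hat)  # reconstruction error shrinks as k grows
```

As \(k\) approaches the rank of \(X\), the reconstruction error goes to zero; the factor models in the graph differ mainly in the constraints placed on \(U\) and \(V\) and in the (possibly non-identity) link function.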
PCA: principal components analysis
Exponential PCA: exponential principal components analysis
Discrete CA: discrete principal components analysis
FA: factor analysis
SVD: singular value decomposition
LDA: latent Dirichlet allocation
NMF: non-negative matrix factorization
PLSI: probabilistic latent semantic indexing (also known as PLSA, probabilistic latent semantic analysis)
K-means: k-means cluster approach
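One of the equivalences in the graph, PCA ≡ SVD on centered data, can be checked directly: the eigenvalues of the covariance matrix match the squared singular values of the centered data divided by \(n - 1\). A minimal sketch with simulated data (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
Xc = X - X.mean(axis=0)            # center columns, as PCA does

# PCA via eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigh(cov)[0][::-1]   # eigh returns ascending order

# The same component variances via SVD of the centered data
d = np.linalg.svd(Xc, full_matrices=False)[1]
svd_vars = d**2 / (len(Xc) - 1)

print(np.allclose(eigvals, svd_vars))    # True
```

The other connections (e.g., NMF to PLSI, or k-means as a constrained factorization) follow the same pattern: a shared form \(X \approx UV'\) with different constraints on the factors.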